
    SynGraphy: Succinct Summarisation of Large Networks via Small Synthetic Representative Graphs

    We describe SynGraphy, a method for visually summarising the structure of large network datasets by drawing smaller graphs generated to have structural properties similar to those of the input graphs. Visualising complex networks is crucial to understanding and making sense of networked data and the relationships it represents. Due to the large size of many networks, visualisation is extremely difficult: naively drawing large networks such as those of Facebook or Twitter leads to graphics that convey little or no information. While modern graph layout algorithms can scale computationally to large networks, their output tends toward a common "hairball" look, which makes it difficult even to distinguish different graphs from each other. Graph sampling and graph coarsening techniques partially address these limitations, but they can only preserve a subset of the properties of the original graphs. In this paper we approach the problem of visualising large graphs from a novel perspective: we leave the original graph's nodes and edges behind, and instead summarise its properties, such as the clustering coefficient and bipartivity, by generating a completely new graph whose structural properties match those of the original. To verify the utility of this approach compared to other graph visualisation algorithms, we perform an experimental evaluation in which we repeatedly asked subjects (professionals in graph mining and related areas) to determine which of two given graphs has a given structural property, and then assessed which visualisation algorithm helped in identifying the correct answer. Our summarisation approach SynGraphy compares favourably to other techniques on a variety of networks. Comment: 24 pages
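    The abstract does not give SynGraphy's generator, but the core idea, producing a small synthetic graph whose measured properties match a target, can be sketched. The sketch below is a simplified stand-in: it searches small random graphs for one whose average clustering coefficient is close to a target value (SynGraphy matches more properties than this).

```python
import random

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph
    given as {node: set(neighbours)}; degree-<2 nodes count as 0."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def random_graph(n, p, rng):
    """Erdos-Renyi G(n, p) as an adjacency dict."""
    adj = {v: set() for v in range(n)}
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def synthesize(target_cc, n=30, trials=2000, tol=0.05, seed=0):
    """Search small random graphs for one whose clustering coefficient
    is within `tol` of the target (a toy stand-in for SynGraphy's
    property-matching generator)."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(trials):
        g = random_graph(n, rng.uniform(0.1, 0.6), rng)
        err = abs(clustering_coefficient(g) - target_cc)
        if err < best_err:
            best, best_err = g, err
            if err <= tol:
                break
    return best, best_err
```

    A 30-node graph with a clustering coefficient near the original's can then be laid out and drawn in place of the full network.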

    REWOrD: Semantic relatedness in the web of data

    This paper presents REWOrD, an approach to computing semantic relatedness between entities in the Web of Data representing real-world concepts. REWOrD exploits the graph nature of RDF data and the SPARQL query language to access these data. Through simple queries, REWOrD constructs weighted vectors capturing the informativeness of the RDF predicates used to make statements about the entities being compared. The most informative path is also considered to further refine informativeness. Relatedness is then computed as the cosine of the weighted vectors. Unlike previous approaches based on Wikipedia, REWOrD does not require any preprocessing or custom data transformation; indeed, it can leverage any RDF knowledge base as a source of background knowledge. We evaluated REWOrD in different settings by using a new dataset of real-world entities and investigated its flexibility. Compared to related work on classical datasets, REWOrD obtains comparable results while, on the one hand, avoiding the burden of preprocessing and data transformation and, on the other hand, providing more flexibility and applicability in a broad range of domains. Copyright © 2012, Association for the Advancement of Artificial Intelligence. All rights reserved.
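    The weighted-vector-plus-cosine step can be sketched concretely. The weighting below is an illustrative IDF-style score over predicate counts, not REWOrD's exact formula, and the path-based refinement is omitted; function names and sample data are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors given as dicts."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def predicate_vector(pred_counts, pred_freq, total_triples):
    """Weight each predicate describing an entity by an IDF-style
    informativeness score: rare predicates are more informative.
    (Illustrative stand-in for REWOrD's weighting scheme.)"""
    return {p: c * math.log(total_triples / pred_freq[p])
            for p, c in pred_counts.items()}

# Hypothetical predicate usage counts from a SPARQL endpoint:
freq = {"rdf:type": 500, "dbo:influencedBy": 5, "dbo:genre": 50}
v1 = predicate_vector({"rdf:type": 2, "dbo:influencedBy": 1}, freq, 1000)
v2 = predicate_vector({"rdf:type": 1, "dbo:influencedBy": 2}, freq, 1000)
relatedness = cosine(v1, v2)
```

    Because the rarer predicate gets a larger weight, two entities sharing it score as more related than two sharing only a ubiquitous predicate like `rdf:type`.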

    A semantic similarity metric combining features and intrinsic information content

    In many research fields, such as Psychology, Linguistics, Cognitive Science and Artificial Intelligence, computing semantic similarity between words is an important issue. This paper presents a new semantic similarity metric that exploits some notions of the feature-based theory of similarity and translates them into the information-theoretic domain, leveraging the notion of Information Content (IC). In particular, the proposed metric exploits the notion of intrinsic IC, which quantifies IC values by examining how concepts are arranged in an ontological structure. In order to evaluate this metric, an online experiment asking the community of researchers to rank a list of 65 word pairs was conducted. The experiment's web setup allowed us to collect 101 similarity ratings and to differentiate native and non-native English speakers. Such a large and diverse dataset enables similarity metrics to be evaluated with confidence by correlating them with human assessments. Experimental evaluations using WordNet indicate that the proposed metric, coupled with the notion of intrinsic IC, yields results above the state of the art. Moreover, the intrinsic IC formulation also improves the accuracy of other IC-based metrics. In order to investigate the generality of both the intrinsic IC formulation and the proposed similarity metric, a further evaluation using the MeSH biomedical ontology has been performed. Even in this case significant results were obtained. The proposed metric and several others have been implemented in the Java WordNet Similarity Library. (C) 2009 Elsevier B.V. All rights reserved.
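    The intrinsic IC idea is easy to state in code: a concept subsuming many others carries little information, while a leaf carries the most. The sketch below uses the well-known intrinsic IC formulation based on hyponym counts, paired with a Lin-style similarity as an illustration only; the paper's actual metric combines IC with feature-based notions differently.

```python
import math

def intrinsic_ic(num_hyponyms, max_nodes):
    """Intrinsic Information Content from taxonomy structure alone:
    IC(c) = 1 - log(hypo(c) + 1) / log(max_nodes).
    A leaf (0 hyponyms) gets IC 1; the root gets IC 0."""
    return 1.0 - math.log(num_hyponyms + 1) / math.log(max_nodes)

def lin_similarity(ic_lcs, ic1, ic2):
    """Lin-style IC similarity: 2 * IC(lcs) / (IC(c1) + IC(c2)).
    Illustrative use of intrinsic IC; not the paper's exact metric."""
    return 2.0 * ic_lcs / (ic1 + ic2) if ic1 + ic2 else 0.0
```

    Substituting intrinsic IC for corpus-derived IC is what lets such metrics run on any ontology (WordNet, MeSH) without frequency-annotated corpora.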

    UFOme: An ontology mapping system with strategy prediction capabilities

    Ontology mapping, or matching, aims at identifying correspondences among entities in different ontologies. Several strands of research have produced algorithms, often combining multiple mapping strategies, to improve mapping accuracy. However, few approaches have systematically investigated the requirements of a mapping system from both the functional point of view (i.e., the features that are required) and the user point of view (i.e., how the user can exploit these features). This paper presents an ontology mapping software framework designed and implemented to help users (both expert and non-expert) in designing and/or exploiting comprehensive mapping systems. It is based on a library of mapping modules implementing functions such as discovering mappings or evaluating mapping strategies. In particular, the strategy predictor module of the framework can, for each specific mapping task, "predict" the mapping modules to be exploited and their parameter values (e.g., weights and thresholds). The implemented system, called UFOme, assists users during the various phases of a mapping task by providing a user-friendly ontology mapping environment. The UFOme implementation and its prediction capabilities and accuracy were evaluated on the Ontology Alignment Evaluation Initiative tests with encouraging results. (C) 2009 Elsevier B.V. All rights reserved.
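    The "combine multiple strategies with predicted weights and thresholds" pattern can be sketched generically. Everything below is hypothetical scaffolding, not UFOme's API: matchers are scoring functions over entity labels, and the weights/threshold stand in for values the strategy predictor would choose per task.

```python
from difflib import SequenceMatcher

def combine_matchers(e1, e2, matchers, weights):
    """Weighted sum of individual matcher scores for one entity pair."""
    return sum(w * m(e1, e2) for m, w in zip(matchers, weights))

def align(src, tgt, matchers, weights, threshold=0.7):
    """Keep correspondences whose combined score meets the threshold.
    In a UFOme-like system, weights and threshold would come from the
    strategy predictor rather than being fixed by hand."""
    return [(s, t, score)
            for s in src for t in tgt
            if (score := combine_matchers(s, t, matchers, weights)) >= threshold]

# One simple string-based matcher as the only strategy:
name_matcher = lambda a, b: SequenceMatcher(None, a.lower(), b.lower()).ratio()
pairs = align(["Author", "Title"], ["author", "name"], [name_matcher], [1.0])
```

    Adding a structural or lexical-resource matcher is just another entry in `matchers` with its predicted weight.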

    Semantic flow networks: Semantic interoperability in networks of ontologies

    In an open context such as the Semantic Web, information providers usually rely on different ontologies to semantically characterize content. In order to enable interoperability at the semantic level, the ontologies underlying information sources must be linked by discovering alignments, that is, sets of correspondences or mappings. The aim of this paper is to provide a formal model (i.e., Semantic Flow Networks) for representing networks of ontologies and alignments, in order to investigate the problem of composite mapping discovery. Semantic Flow Networks (SFN) differ from other models of networks of ontologies in two main aspects. First, SFN consider constraints over mappings that are necessary to take their dependencies into account. Second, a different notion of mapping, that is, compound mapping, is considered. Complexity results and a CSP formulation for composite mapping discovery are provided. © 2012 Springer-Verlag
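    A minimal sketch of composite mapping discovery, ignoring the SFN constraints and compound mappings that make the full problem hard: treat direct mappings as weighted edges between ontology entities and search for the highest-confidence composition along a path, multiplying confidences. Entity names and confidences are invented for illustration.

```python
import heapq
from collections import defaultdict

def best_composite(mappings, source, target):
    """Highest-confidence composite mapping from source to target,
    where a path's confidence is the product of its edge confidences.
    Max-product variant of Dijkstra's algorithm; the dependency
    constraints of the SFN model are deliberately omitted."""
    graph = defaultdict(list)
    for a, b, conf in mappings:
        graph[a].append((b, conf))
    best = {source: 1.0}
    heap = [(-1.0, source)]            # negate for a max-heap
    while heap:
        neg_conf, node = heapq.heappop(heap)
        conf = -neg_conf
        if node == target:
            return conf
        if conf < best.get(node, 0.0):
            continue                   # stale queue entry
        for nxt, c in graph[node]:
            nc = conf * c
            if nc > best.get(nxt, 0.0):
                best[nxt] = nc
                heapq.heappush(heap, (-nc, nxt))
    return 0.0

# Hypothetical three-ontology network:
net = [("O1:Car", "O2:Auto", 0.9),
       ("O2:Auto", "O3:Vehicle", 0.8),
       ("O1:Car", "O3:Vehicle", 0.5)]
score = best_composite(net, "O1:Car", "O3:Vehicle")
```

    Here the two-hop composition (0.9 × 0.8 = 0.72) beats the weaker direct mapping (0.5), which is exactly why discovering composite mappings in a network of ontologies pays off.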

    Semantic navigation on the web of data: Specification of routes, web fragments and actions

    The massive semantic data sources linked in the Web of Data give new meaning to old features like navigation, introduce new challenges like the semantic specification of Web fragments, and make it possible to specify actions relying on semantic data. In this paper we introduce a declarative language to address these challenges. Based on navigational features, it is designed to specify fragments of the Web of Data and actions to be performed on these data. We implement it in a centralized fashion and show its power and performance. Finally, we explore the same ideas in a distributed setting, showing their feasibility, potential and challenges.

    A framework for distributed knowledge management: Design and implementation

    This paper describes a framework for implementing distributed ontology-based knowledge management systems (DOKMS). The framework focuses, in particular, on knowledge management within organizations. It investigates the functional requirements for enabling Individual Knowledge Workers (IKWs) and distributed communities (e.g., project teams) to create, manage and share knowledge with the support of ontologies. On the one hand, the framework enables distributed and collaborative work by relying on a P2P virtual office model. On the other hand, it provides a multi-layer ontology framework to enable semantics-driven knowledge processing. The ontology framework allows organizational knowledge to be modeled at different levels. An Upper Ontology is exploited to establish a common organizational knowledge background. A set of Workspace Ontologies can be designed to manage, share and search knowledge within communities through the establishment of a contextual (i.e., related to the aim of a group) understanding. Finally, Personal Ontologies support IKWs in personal knowledge management activities. We present an implementation of the designed framework in the K-link+ system and show the suitability of this approach through a use case. The evaluation of K-link+ in a real network is also discussed. (C) 2009 Elsevier B.V. All rights reserved.